Search results for: "Frontier Model"


17 mentions found


So we have to talk about the drama that has been playing out in the past week between OpenAI and Elon Musk. According to OpenAI, Elon Musk wanted majority equity, initial board control, and to be CEO of this new for-profit subsidiary. It’s basically —

casey newton: It’s like, I’m going to find a way to follow your rule, but in the worst way possible. Like, working was one I thought that, oh, I’m going to work in this all the time.

kevin roose: [LAUGHS] Well, I thought, like, I’m going to take some spatial videos.
The firing and rehiring of OpenAI CEO Sam Altman has undone months of effort by Microsoft to avoid antitrust regulators probing its massive investment in the startup. It's tough to keep a huge business partnership like this out of what can be intense scrutiny from antitrust regulators. Nadella agreed to give Altman and Brockman their own research arm at Microsoft if he couldn't negotiate their return to OpenAI. Another interpretation is that Microsoft is keen to show antitrust regulators that OpenAI is an independent company, not one controlled by the software giant.
AI was a major focus of questions from Microsoft investors during the event. On Thursday, Microsoft executives made a point of assuring investors that the company has many irons in the fire when it comes to AI, not just OpenAI. Microsoft Chief Financial Officer Amy Hood jumped in to emphasize that the company has AI partners beyond OpenAI. As Business Insider recently reported, the chaos at OpenAI caused some partners to start looking for a "plan B" for their AI model needs.
These two diverging camps — the open and the closed — disagree about whether to build AI in a way that makes the underlying technology widely accessible. "So it’s not like a thing that is locked in a barrel and no one knows what they are."

WHAT'S OPEN-SOURCE AI? Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed. An increasingly public debate has emerged over the benefits or dangers of adopting an open-source approach to AI development. Weights are numerical parameters that influence how an AI model performs.
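To make "weights" concrete, here is a toy sketch (an illustration added for this compilation, not drawn from any of the articles above): a model's behavior is entirely determined by its numerical parameters, which is why releasing them is the crux of the open-versus-closed debate.

```python
import numpy as np

# A toy "model": a single linear layer. The weights are the numerical
# parameters that determine how an input is mapped to an output.
weights = np.array([0.5, -1.0, 2.0])
bias = 0.1

def predict(x):
    # The model's behavior is fully determined by its weights; anyone
    # holding these numbers can reproduce (or modify) the model.
    return float(np.dot(weights, x) + bias)

y = predict(np.array([1.0, 1.0, 1.0]))  # 0.5 - 1.0 + 2.0 + 0.1 = 1.6
```

Frontier models such as GPT-4 contain billions of such parameters rather than three, which is why making their weights "widely available" is treated as a policy question in its own right.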
Meta and IBM have launched an alliance that's calling for an "open science" approach to AI development. Facebook parent Meta and IBM on Tuesday launched a new group called the AI Alliance that's advocating for an "open science" approach to AI development, which puts them at odds with rivals Google, Microsoft and ChatGPT-maker OpenAI. Part of the confusion around open-source AI is that despite its name, OpenAI — the company behind ChatGPT and the image-generator DALL-E — builds AI systems that are decidedly closed. An increasingly public debate has emerged over the benefits or dangers of adopting an open-source approach to AI development. Biden's order described open models with the technical name of "dual-use foundation models with widely available weights" and said they needed further study.
AI threat demands new approach to security designs -US official
2023-11-27 | www.reuters.com | time to read: +2 min
AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

OTTAWA, Nov 27 (Reuters) - The potential threat posed by the rapid development of artificial intelligence (AI) means safeguards need to be built into systems from the start rather than tacked on later, a top U.S. official said on Monday. "We've normalized a world where technology products come off the line full of vulnerabilities and then consumers are expected to patch those vulnerabilities. We can't live in that world with AI," said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency. "We have to look at security throughout the lifecycle of that AI capability," Khoury said.
LISBON, Nov 14 (Reuters) - Chelsea Manning, a former U.S. army analyst and WikiLeaks source, said on Tuesday that technology tools can be more efficient in protecting people's privacy and information than legal or regulatory mechanisms that risk being tampered with. "I believe very strongly that there are technical means of protecting information and those are more reliable," Manning told Reuters in an interview during Europe's largest technology conference, the Web Summit, in Lisbon, Portugal. Manning currently works as a security consultant at Nym Technologies, a network that aims to prevent governments and companies from tracking people's online activities.

'SIDESTEPPING ETHICS' Artificial intelligence (AI) is the big topic at this year's Web Summit, which draws tens of thousands of participants and high-level speakers from global tech companies, as well as politicians. Reporting by Catarina Demony; Additional reporting by Supantha Mukherjee; Editing by Aurora Ellis
Where it's being held: The AI summit will be held in Bletchley Park, the historic landmark around 55 miles north of London.

What it seeks to address: The main objective of the U.K. AI summit is to find some level of international coordination when it comes to agreeing on some principles for the ethical and responsible development of AI models. The British government wants the AI Summit to serve as a platform to shape the technology's future. They say that, by keeping the summit restricted to only frontier AI models, it is a missed opportunity to encourage contributions from members of the tech community beyond frontier AI. "By focusing only on companies that are currently building frontier models and are leading that development right now, we're also saying no one else can come and build the next generation of frontier models."
The Frontier Model Forum also said it created a fund to back research into the technology, with initial funding commitments of more than $10 million from its backers and partners. It said its first-ever director would be Chris Meserole, who most recently served as director of the AI and emerging technology initiative at the Brookings Institution, a Washington-based think tank. He joins a forum launched in July with a focus on "frontier AI models" that exceed the capabilities present in the most advanced existing models. Industry leaders have warned that such models could have dangerous capabilities sufficient to pose severe risks to public safety. The Frontier Model Forum is backed by ChatGPT-owner OpenAI, Microsoft (MSFT.O), Google's parent Alphabet (GOOGL.O) and AI startup Anthropic.
Nvidia and a number of other chipmakers saw shares fall Tuesday morning after the U.S. announced new restrictions on exports of artificial intelligence chips to China. Shares of chip stocks have boomed in the last year due to the increased demand for AI products and services, which are powered by AI chips. The new restrictions on exports to China are a step up from previously announced restrictions on artificial intelligence chips that the Biden administration had implemented over the last year. The new restrictions ban the sale of the slowed-down versions of Nvidia's chips, the H800 and A800, that were allowed to be exported to China under the old restrictions. Nvidia believes that the increased restrictions will not immediately lead to a material effect on its financial performance.
The U.S. Department of Commerce announced Tuesday that it plans to prevent the sale of more advanced artificial intelligence chips to China in the coming weeks. Those earlier restrictions banned the sale of the Nvidia H100, which is the processor of choice for AI firms in the U.S. such as OpenAI. The new rules will ban those chips as well, senior administration officials said in a briefing with reporters. Other rules will likely hamper the sale and export to China of semiconductor manufacturing equipment from companies such as Applied Materials, Lam and KLA. Companies that want to export AI chips to China or other embargoed regions will have to notify the U.S. government.
Four leading artificial intelligence companies launched a new industry group on Wednesday to identify best safety practices and promote the technology's use toward addressing great societal challenges. The group underscores how, until policymakers come up with new rules, the industry will likely need to continue to police itself. Anthropic, Google, Microsoft and OpenAI said the new Frontier Model Forum had four key goals, which Google outlined in a blog post:

1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
July 26 (Reuters) - OpenAI, Microsoft (MSFT.O), Alphabet's (GOOGL.O) Google and Anthropic are launching a forum to regulate the development of large machine learning models, the industry leaders in artificial intelligence said on Wednesday. The group will focus on ensuring the safe and responsible development of what are called "frontier AI models," which exceed the capabilities present in the most advanced existing models. These are highly capable foundation models that could have dangerous capabilities sufficient to pose severe risks to public safety. Generative AI models, like the one behind chatbots like ChatGPT, extrapolate large amounts of data at high speed to share responses in the form of prose, poetry and images. The industry body, the Frontier Model Forum, will work to advance AI safety research, identify best practices for the deployment of frontier AI models and work with policymakers, academics and companies.
The new organization, known as the Frontier Model Forum, was announced Wednesday by Google, Microsoft, OpenAI and Anthropic. The companies said the forum’s mission would be to develop best practices for AI safety, promote research into AI risks, and publicly share information with governments and civil society. Wednesday’s announcement reflects how AI developers are coalescing around voluntary guardrails for the technology ahead of an expected push this fall by US and European Union lawmakers to craft binding legislation for the industry. “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft president Brad Smith. “In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology,” Amodei said in his written testimony.
But Microsoft is making clear that it's not strictly an OpenAI shop when it comes to generative AI. On Tuesday at its Inspire conference, the company said it's making Meta's new AI large language model, dubbed Llama 2, available on its Azure cloud computing service. Meta said in a blog post that Microsoft is its "preferred partner" for its Llama 2 software, which is available for free for companies and researchers. "Meta and Microsoft share a commitment to democratizing AI and its benefits and we are excited that Meta is taking an open approach with Llama 2," Microsoft said. Llama 2 will also be available through Amazon Web Services and Hugging Face, a popular service used by AI researchers.
Microsoft shares climbed to a record Thursday after analysts at JPMorgan Chase touted the software maker's growth prospects in artificial intelligence. AI has been a hot topic all year, after Microsoft-backed OpenAI in November released the ChatGPT chatbot, which quickly went viral. In the past four quarters, Microsoft has generated almost $208 billion in total revenue. Negative sentiment around cloud growth and a contracting PC market led to pessimism on Wall Street last year. But the excitement around AI in addition to the cost-cutting measures that tech companies implemented produced a renewed bullishness.
After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public. "AGI safety is really important, and frontier models should be regulated," Altman tweeted. Large language models, like OpenAI's GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos. Some are more concerned about what they call "AI safety." "There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk," Montgomery told Congress.
Total: 17